Reweighted l1 Dual Averaging Approach for Sparse Stochastic Learning
Authors
Abstract
Recent advances in stochastic optimization and regularized dual averaging approaches have revealed substantial interest in simple, scalable stochastic methods tailored to more specific needs, among the latest of which are sparse signal recovery and l0-based sparsity-inducing approaches. These methods can force many components of the solution to shrink to zero, thus clarifying the importance of the features and simplifying the evaluation. In this paper we concentrate on enhancing the sparsity of the recently proposed l1 Regularized Dual Averaging (RDA) method with a simple iterative reweighting procedure which, in the limit, applies the l0-norm penalty. We present theoretical justification in the form of a bounded regret for a sequence of convex repeated games, where every game stands for a separate reweighted l1-RDA problem. Numerical results show the enhanced sparsity of the proposed approach and some improvement over the l1-RDA method in generalization error.
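The reweighting scheme described above can be sketched as an outer loop around a weighted l1-RDA solver: each stage sets the per-coordinate l1 weights to 1/(|w_i| + eps) from the previous stage's solution, so that coordinates already near zero receive an ever-larger penalty, mimicking an l0-norm penalty in the limit. The following is a minimal sketch only; the function names, the constants lam, gamma, eps, the number of stages, and the fixed gradient stream are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np

def l1_rda(grads, lam, gamma, weights):
    """One pass of weighted l1-RDA using the closed-form update
    (soft-thresholding of the averaged subgradient, as in Xiao, 2010).
    `grads` is a (T, d) array of stochastic subgradients."""
    g_bar = np.zeros(grads.shape[1])
    for t, g in enumerate(grads, start=1):
        g_bar += (g - g_bar) / t                      # running average of subgradients
        # soft-threshold the average: coordinates below their threshold become exactly 0
        w = -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(
            np.abs(g_bar) - lam * weights, 0.0)
    return w

def reweighted_l1_rda(grads, lam=0.5, gamma=1.0, stages=3, eps=1e-3):
    """Outer reweighting loop: each stage solves a weighted l1-RDA
    problem; small coordinates get a large weight, approximating l0."""
    weights = np.ones(grads.shape[1])
    for _ in range(stages):
        w = l1_rda(grads, lam, gamma, weights)
        weights = 1.0 / (np.abs(w) + eps)             # reweight for the next stage
    return w

# toy stream: one strong and one weak coordinate; the weak one is zeroed out
grads = np.tile(np.array([2.0, 0.01]), (4, 1))
w = reweighted_l1_rda(grads)                          # w[1] is exactly 0.0
```

The reweighting step 1/(|w_i| + eps) is the standard choice from reweighted l1 minimization (Candès et al.); eps keeps the weight finite for coordinates that are exactly zero.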
Similar resources
Reweighted ℓ1 minimization method for stochastic elliptic differential equations
We consider elliptic stochastic partial differential equations (SPDEs) with random coefficients and solve them by expanding the solution using generalized polynomial chaos (gPC). Under some mild conditions on the coefficients, the solution is "sparse" in the random space, i.e., only a small number of gPC basis functions make a considerable contribution to the solution. To exploit this sparsity, we emplo...
Dual Averaging Method for Regularized Stochastic Learning and Online Optimization
We consider regularized stochastic learning and online optimization problems, where the objective function is the sum of two convex terms: one is the loss function of the learning task, and the other is a simple regularization term such as l1-norm for promoting sparsity. We develop a new online algorithm, the regularized dual averaging (RDA) method, that can explicitly exploit the regularizatio...
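The key feature of the RDA method mentioned in this snippet is that, with an l1 regularizer, each round has a closed-form update: the running average of subgradients is soft-thresholded, so coordinates whose averaged gradient stays below the regularization level are set exactly to zero. A minimal sketch of that single update, assuming the simple sqrt(t) step-size schedule from Xiao's paper (lam and gamma here are illustrative constants):

```python
import numpy as np

def rda_l1_step(g_bar, t, lam=0.1, gamma=1.0):
    """Closed-form l1-RDA update: given the running average g_bar of
    subgradients after t rounds, soft-threshold at lam and scale by
    sqrt(t)/gamma. Small averaged gradients yield exact zeros."""
    return -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(
        np.abs(g_bar) - lam, 0.0)

# |0.05| < lam, so the first coordinate is exactly zero; the second survives
w = rda_l1_step(np.array([0.05, -0.4]), t=9, lam=0.1)   # w[0] == 0.0, w[1] ≈ 0.9
```

This truncation of the full averaged gradient (rather than of a single stochastic gradient, as in proximal SGD) is what lets RDA produce genuinely sparse iterates online.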
Manifold Identification for Regularized Stochastic Online Learning
Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term whose purpose is to induce structure in the solution, the solution often lies on a low-dimensional manifold of parameter space along w...
Manifold Identification in Dual Averaging for Regularized Stochastic Online Learning
Iterative methods that calculate their steps from approximate subgradient directions have proved to be useful for stochastic learning problems over large and streaming data sets. When the objective consists of a loss function plus a nonsmooth regularization term, the solution often lies on a low-dimensional manifold of parameter space along which the regularizer is smooth. (When an l1 regularize...
A comparison of typical ℓp minimization algorithms
Recently, compressed sensing has been widely applied to various areas such as signal processing, machine learning, and pattern recognition. To find the sparse representation of a vector w.r.t. a dictionary, an l1 minimization problem, which is convex, is usually solved in order to overcome the computational difficulty. However, to guarantee that the l1 minimizer is close to the sparsest solutio...